23 Jul 2025
MRI, like robotics, has a high trial-and-error cost.
Can we create a digital playground to answer:
How can we get faster and more useful images?
Study in collaboration with Anastasia Fotaki, KCL. To be submitted.
Free-running T1 and T2 mapping at 0.55T by Diego Pedraza, UC. To be submitted.
Describe the evolution of the magnetization: \[ \frac{\mathrm{d}\vec{M}}{\mathrm{d}t} = \gamma \vec{M} \times \vec{B}(t) - \left( \frac{M_x}{T_2}, \frac{M_y}{T_2}, \frac{M_z - M_0}{T_1} \right) \]
\[ \Large \frac{\mathrm{d}\vec{M}}{\mathrm{d}t} = \mathrm{bloch}(t, \vec{M}) \]
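For a single spin, the equation above can be integrated directly. A minimal plain-Julia sketch (classic RK4; the field, relaxation times, and step size are illustrative assumptions, not Koma's internals):

```julia
using LinearAlgebra  # cross product

# Illustrative constants (all values are assumptions for this sketch)
const γ  = 2π * 42.58e6       # ¹H gyromagnetic ratio [rad s⁻¹ T⁻¹]
const M0 = 1.0                # equilibrium magnetization
const T1, T2 = 1.0, 0.1      # relaxation times [s]
B(t) = [0.0, 0.0, 1e-6]       # weak static field [T]

# Right-hand side of the Bloch equation above
bloch(t, M) = γ * cross(M, B(t)) - [M[1] / T2, M[2] / T2, (M[3] - M0) / T1]

# Classic fourth-order Runge–Kutta step
function rk4_step(f, t, M, dt)
    k1 = f(t, M)
    k2 = f(t + dt / 2, M + dt / 2 * k1)
    k3 = f(t + dt / 2, M + dt / 2 * k2)
    k4 = f(t + dt, M + dt * k3)
    return M + dt / 6 * (k1 + 2k2 + 2k3 + k4)
end

function integrate(f, M, t_end, dt)
    for t in 0:dt:(t_end - dt)
        M = rk4_step(f, t, M, dt)
    end
    return M
end

# Start fully tipped into the transverse plane over 0.5 s:
# Mxy decays with T2 while Mz recovers toward M0 with T1
M = integrate(bloch, [1.0, 0.0, 0.0], 0.5, 1e-4)
```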
How can we solve this for 10,000 spins? (a 100×100 grid)
How can we solve this for 100,000,000 spins? (a 460×460×460 grid)
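The key to these scales is operating on whole arrays of spins at once instead of looping spin by spin. A minimal plain-Julia sketch of the idea, using an analytic precession/relaxation update per time step (values are illustrative):

```julia
# Update 10,000 spins at once with broadcasting; each spin has its own
# off-resonance and relaxation times (all values here are illustrative).
N   = 10_000                  # e.g. a 100×100 grid
Mxy = ones(ComplexF64, N)     # transverse magnetization per spin
Mz  = zeros(N)                # longitudinal magnetization per spin
T1  = fill(1.0, N)            # [s]
T2  = fill(0.1, N)            # [s]
Δf  = 10 .* randn(N)          # off-resonance per spin [Hz]
M0, Δt = 1.0, 1e-3

# One simulation step for all spins simultaneously:
Mxy .*= cis.(-2π .* Δf .* Δt) .* exp.(-Δt ./ T2)   # precession + T2 decay
Mz  .= M0 .+ (Mz .- M0) .* exp.(-Δt ./ T1)          # T1 recovery
```

Because the update is pure broadcasting, the same lines run unchanged on GPU arrays (e.g. a CUDA.jl `CuArray`), which is how array-based simulators reach 10⁸ spins.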
Results can be exported to .mat.
[ Info: Loading sequence spiral.seq ...
# Format of blocks:
# NUM DUR RF GX GY GZ ADC EXT
[BLOCKS]
1 1621 1 0 0 1 0 0
2 319 2 0 0 2 0 0
3 2245 0 4 5 3 1 0
4 104 0 7 8 6 0 0
# Format of RF events:
# id amplitude mag_id phase_id time_shape_id delay freq phase
# .. Hz .... .... .... us Hz rad
[RF]
1 129.712 1 2 0 100 -424.504 0
2 329.152 3 4 0 100 0 0
# Format of arbitrary gradients:
# time_shape_id of 0 means default timing (stepping with grad_raster starting at 1/2 of grad_raster)
# id amplitude amp_shape_id time_shape_id delay
# .. Hz/m .. .. us
[GRADIENTS]
4 -773910 5 0 790
5 771376 6 0 790
7 -773910 7 8 0
8 32610 7 8 0
# Format of trapezoid gradients:
# id amplitude rise flat fall delay
# .. Hz/m us us us us
[TRAP]
1 1.27714e+06 250 7580 250 8130
2 444444 90 3000 90 10
3 -1.2716e+06 250 290 250 0
6 1.26582e+06 250 540 250 0
# Format of ADC events:
# id num dwell delay freq phase
# .. .. ns us Hz rad
[ADC]
1 12000 1700 790 0 0
# Sequence Shapes
[SHAPES]
shape_id 1
num_samples 8000
0.00011671692
0.000117246598
0.000117778547
0.000118312775
...
0.0422751249
shape_id 7
num_samples 2
1
0
shape_id 8
num_samples 2
0
104
[SIGNATURE]
# This is the hash of the Pulseq file, calculated right before the [SIGNATURE] section was added
# It can be reproduced/verified with md5sum if the file is trimmed to the position right above [SIGNATURE]
# The new line character preceding [SIGNATURE] BELONGS to the signature (and needs to be stripped away for recalculating/verification)
Type md5
Hash efc5eb7dbaa82aba627a31ff689c8649

|  | CPU | GPU1 | GPU2 |
|---|---|---|---|
| Name | Intel i7-1165G7 | GTX 1650 Ti | RTX 2080 Ti |
| JEMRIS | \(\approx7\,\mathrm{min}\) | - | - |
| MRiLab | \(1.56\,\mathrm{s} \pm 0.07\,\mathrm{s}\) | \(0.84\,\mathrm{s} \pm 0.02\,\mathrm{s}\) | \(0.91\,\mathrm{s} \pm 0.02\,\mathrm{s}\) |
| Koma | \(1.82\,\mathrm{s} \pm 0.17\,\mathrm{s}\) | \(0.32\,\mathrm{s} \pm 0.02\,\mathrm{s}\) | \(0.15\,\mathrm{s} \pm 0.01\,\mathrm{s}\) |
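Mean ± std timings like those in the table can be collected with a simple warm-up-then-repeat loop; a minimal sketch, where `work` is a hypothetical stand-in for the actual `simulate` call:

```julia
using Statistics  # mean, std

# Hypothetical stand-in for the simulation being benchmarked
work() = sum(abs2, rand(100_000))

work()  # warm-up run so JIT compilation is not timed
times = [@elapsed work() for _ in 1:10]
println(round(mean(times) * 1e3; digits = 2), " ms ± ",
        round(std(times) * 1e3; digits = 2), " ms")
```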
These results show clear performance improvements on GPU, together with reduced memory usage.
Koma is compatible with HPC and SLURM pipelines:
using Distributed, CUDA
addprocs(length(devices())) # One process per GPU
@everywhere begin
    using KomaMRI, CUDA
    sys = Scanner()
    seq = PulseDesigner.EPI_example()
    obj = brain_phantom2D()
    parts = kfoldperm(length(obj), nworkers()) # e.g. [1:10, 11:20, 21:30]
end
# Simulation: each worker simulates its part of the phantom; partial signals are summed
raw = @distributed (+) for i = 1:nworkers()
    KomaMRICore.set_device!(i - 1) # Set the GPU device for this worker
    simulate(obj[parts[i]], seq, sys)
end

An example SLURM batch script to launch the job:

#!/bin/bash
#SBATCH --job-name KomaDistributed # Job name
#SBATCH -t 0-00:30 # Max runtime for job
#SBATCH -p batch # Enter partition on which to run the job
#SBATCH --ntasks=1 # 1 task
#SBATCH --cpus-per-task=1 # Request 1 CPU
#SBATCH --gpus=4 # Request 4 GPUs
#SBATCH -o /mnt/workspace/%u/slurm-out/%test.out # Enter file path to write stdout to
#SBATCH -e /mnt/workspace/%u/slurm-out/%test.err # Enter file path to write stderr to
module load julia/1.10.2
julia script.jl

Motion (Path, FlowPath) can use CFD-generated particle trajectories.
KomaMRI: 9 min vs CMRsim: 39 min.
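A CFD solver typically outputs particle positions on its own coarse time grid, so trajectories must be resampled onto the simulation times before they can drive a Path-style motion. A minimal sketch using linear interpolation (plain Julia; function and variable names are illustrative, not KomaMRI's API):

```julia
# Linearly resample particle positions (nspins × ntimes) onto query times tq.
function resample_trajectory(pos::AbstractMatrix, t::AbstractVector, tq::AbstractVector)
    out = similar(pos, size(pos, 1), length(tq))
    for (j, τ) in enumerate(tq)
        # Index of the CFD interval containing τ, clamped to valid range
        k = clamp(searchsortedlast(t, τ), 1, length(t) - 1)
        w = (τ - t[k]) / (t[k+1] - t[k])
        out[:, j] .= (1 - w) .* pos[:, k] .+ w .* pos[:, k+1]
    end
    return out
end

# Two particles moving linearly in x, sampled at coarse CFD time steps
t_cfd = [0.0, 1.0, 2.0]
x_cfd = [0.0 1.0 2.0;    # particle 1
         0.0 2.0 4.0]    # particle 2
x_sim = resample_trajectory(x_cfd, t_cfd, collect(0.0:0.25:2.0))

# Path-style motions take displacements relative to the initial position
dx = x_sim .- x_sim[:, 1]
```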